✅ Add e2e tests for metamodeling #8457
base: master
Conversation
Signed-off-by: Alejandro Parcet <[email protected]>
Signed-off-by: Alejandro Parcet <[email protected]>
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files
@@ Coverage Diff @@
## master #8457 +/- ##
==========================================
+ Coverage 87.23% 87.57% +0.34%
==========================================
Files 1935 2012 +77
Lines 76477 79025 +2548
Branches 1368 1368
==========================================
+ Hits 66717 69209 +2492
- Misses 9354 9410 +56
Partials 406 406
This pull request uses carry forward flags.
pcrespov
left a comment
Thanks for the PR!
We usually like to include a bit more context to help us track changes, connect related issues, and coordinate work more easily. It shouldn’t take much time.
A few suggestions:
- Assign yourself to the issue (I went ahead and did that for you this time).
- Add the relevant project and labels.
- Include a short description — you can even use AI for that — and fill out the PR template sections where applicable (e.g. enumerate related PO issues).
tests/e2e-playwright/tests/metamodeling/test_response_surface_modeling.py (3 outdated review threads, resolved)
Signed-off-by: Alejandro Parcet <[email protected]>
…lexpargon/osparc-simcore into add-e2e-tests-for-metamodeling
…hboard each time Signed-off-by: Alejandro Parcet <[email protected]>
```python
with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    plotly_graph = service_iframe.locator(".js-plotly-plot")
    plotly_graph.wait_for(state="visible", timeout=300000)
```
I am a bit confused here: so you wait for the graph to show up? But do you check whether the created sample is included?
The function is created specifically for the test, so it starts out empty with no jobs. We then launch a sampling campaign, which updates the UI when the jobs become ready. Since the runner is a jsonifier and we have timeouts, more than five jobs consistently finish by the time the launch is done, and thus the plot is displayed.
The plot can only be displayed once more than 5 jobs are listed and finished.
If we want to wait for every job to finish, we can either wait longer before refreshing or implement a refresh mechanism every 10 seconds until all of them are complete.
Do you require the test to also check the execution for failed jobs? We can fail the test if any job fails.
I'm not saying every job has to finish, but I think it would be best to at least check that the jobs are listed?
So I finally replaced the simple graph approach with one that checks the table for the status to turn to complete and refreshes it until that happens. After that it selects all the new values and waits for the plot to appear.
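For illustration, a minimal sketch of such a poll-and-refresh loop in Playwright, assuming hypothetical mmux-testid values for the status cells and the refresh button, a parametrized timeout, and the log_context helper used elsewhere in the e2e suite; the selectors and constants in the actual test may differ:

```python
import logging

from playwright.sync_api import FrameLocator, Page
from pytest_simcore.helpers.logging_tools import log_context  # import path assumed

_JOB_COMPLETION_TIMEOUT: int = 5 * 60 * 1000  # ms, hypothetical constant to tune later
_REFRESH_INTERVAL: int = 10 * 1000  # ms


def wait_for_jobs_completed(page: Page, service_iframe: FrameLocator, expected_jobs: int) -> None:
    """Refresh the jobs table until at least `expected_jobs` rows report 'complete'."""
    with log_context(logging.INFO, "Waiting for the sampling to complete...") as ctx:
        for _ in range(_JOB_COMPLETION_TIMEOUT // _REFRESH_INTERVAL):
            # hypothetical testids: replace with the ones defined in the MMuX UI
            completed = service_iframe.locator(
                "[mmux-testid='job-status-cell']", has_text="complete"
            ).count()
            ctx.logger.info("%d/%d jobs completed", completed, expected_jobs)
            if completed >= expected_jobs:
                return
            service_iframe.locator("[mmux-testid='refresh-jobs-btn']").click()
            page.wait_for_timeout(_REFRESH_INTERVAL)
        msg = f"sampling jobs did not reach 'complete' within {_JOB_COMPLETION_TIMEOUT} ms"
        raise AssertionError(msg)
```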
```python
for i in range(count_min):
    input_field = min_inputs.nth(i)
    input_field.fill(str(i + 1))
    print(f"Filled Min input {i} with value {i + 1}")
```
use logs, not print. Get it from log_context()
right, replaced them!
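For reference, a sketch of the same loop using the logger returned by log_context() instead of print (variable names are taken from the snippet above; the message text is illustrative):

```python
with log_context(logging.INFO, "Filling Min inputs...") as ctx:
    for i in range(count_min):
        input_field = min_inputs.nth(i)
        input_field.fill(str(i + 1))
        ctx.logger.info("Filled Min input %d with value %d", i, i + 1)
```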
```python
toast = service_iframe.locator("div.Toastify__toast").filter(
    has_text="Sampling started running successfully, please wait for completion."
)
toast.wait_for(state="visible", timeout=120000)  # waits up to 120 seconds
```
Look at the style of the other tests. We use constants to parametrize key timeouts so we can tune them later.
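A minimal sketch of what that could look like; the constant name is hypothetical and MINUTE is defined here only to keep the snippet self-contained:

```python
from typing import Final

MINUTE: Final[int] = 60 * 1000  # milliseconds

_SAMPLING_STARTED_TOAST_TIMEOUT: Final[int] = 2 * MINUTE  # hypothetical name, tune as needed

toast = service_iframe.locator("div.Toastify__toast").filter(
    has_text="Sampling started running successfully, please wait for completion."
)
toast.wait_for(state="visible", timeout=_SAMPLING_STARTED_TOAST_TIMEOUT)
```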
```python
toast.wait_for(state="visible", timeout=120000)  # waits up to 120 seconds

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    moga_container = service_iframe.locator("[mmux-testid='moga-pareto-plot']")
```
Q: is this unused on purpose?
Replaced this mechanism with a more elaborate approach.
tests/e2e-playwright/tests/metamodeling/test_response_surface_modeling.py (outdated review thread, resolved)
```python
page.wait_for_timeout(15000)

# # then we wait a long time
# page.wait_for_timeout(1 * MINUTE)
```
Remove comments if not used. No commented-out code allowed.
```python
elif message_2.is_visible():
    message_2.wait_for(state="detached", timeout=300000)
else:
    print("No blocking message found — continuing.")
```
log
```python
service_iframe.locator('[mmux-testid="run-sampling-btn"]').click()
page.wait_for_timeout(1000)

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
```
FYI: There is no need to log everything as a start/stop context. Note that you can also log steps within it using the returned object
```python
with log_context(
    logging.INFO,
    f"Convert {project_uuid=} / {_STUDY_FUNCTION_NAME} to a function",
) as ctx:
    ...
    ctx.logger.info(
        "Created function: %s", f"{json.dumps(function_data['data'], indent=2)}"
    )
```
sanderegg
left a comment
This looks good, there are a few things I would suggest:
- no need to wait so much everywhere with timeouts in the production test (it shall run as fast as possible) if there is no functional need (click already has an integrated 30-second default timeout)
- mmux-testid: this is very nice! Please check the locations where you do not use it, as those are fragile and will for sure break as soon as you modify the UI (see the sketch after this list)
- ideally after the test everything should be deleted and cleaned so we do not accumulate e2e data (and actually pay for it)
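As a rough sketch of both points, with hypothetical testids and hypothetical create/delete helpers (the actual fixtures in the e2e suite may look different):

```python
import pytest
from playwright.sync_api import FrameLocator, Page


def open_jobs_table(service_iframe: FrameLocator) -> None:
    # prefer testid-based locators over CSS classes such as ".js-plotly-plot",
    # which break as soon as the UI markup changes
    service_iframe.locator("[mmux-testid='jobs-table']").wait_for(state="visible")


@pytest.fixture
def metamodeling_function(page: Page):
    # create_test_function / delete_test_function are hypothetical helpers; the point
    # is that teardown always removes what the test created, so e2e data does not accumulate
    function_uuid = create_test_function(page)
    yield function_uuid
    delete_test_function(page, function_uuid)
```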
tests/e2e-playwright/tests/metamodeling/test_response_surface_modeling.py (2 outdated review threads, resolved)
tests/e2e-playwright/tests/metamodeling/test_response_surface_modeling.py (review thread, resolved)
Signed-off-by: Alejandro Parcet <[email protected]>
Signed-off-by: Alejandro Parcet <[email protected]>
pcrespov
left a comment
thx
```python
page.keyboard.press("Tab")
page.wait_for_timeout(1000)

if "moga" in service_key.lower():
```
MINOR: highlight these keywords by adding them as CONSTANTS, e.g. EXPECTED_SERVICE_KEY
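For example (a small sketch; the constant name is only the one suggested in the comment):

```python
from typing import Final

EXPECTED_SERVICE_KEY: Final[str] = "moga"  # keyword taken from the check below

if EXPECTED_SERVICE_KEY in service_key.lower():
    ...
```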
sanderegg
left a comment
thanks!
What do these changes do?
Added test for the Meta Modeling services:
To this end, mmux-testids were added to the meta modeling service.

Related issue/s
How to test
The changes include a new test phase which executes e2e Playwright tests on the osparc service: it creates a project with JSONIFIER, creates a function from this project, and then launches MMuX to populate the function with jobs and uses them to test each of the tools MMuX enables.
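A very rough outline of that flow, with hypothetical fixture and helper names, only to illustrate the sequence; the actual implementation lives in tests/e2e-playwright/tests/metamodeling/test_response_surface_modeling.py:

```python
def test_response_surface_modeling(page, log_in_and_out, create_project_from_service):
    # fixture and helper names in this sketch are hypothetical
    # 1. create a study based on the JSONIFIER service
    project_uuid = create_project_from_service("jsonifier")
    # 2. convert the study into a function
    function_uuid = convert_study_to_function(page, project_uuid)
    # 3. launch MMuX, run a sampling campaign to populate the function with jobs,
    #    then exercise each of the MMuX tools on the resulting samples
    run_mmux_tools_checks(page, function_uuid)
```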
Dev-ops